Deep ReLU neural network approximation in Bochner spaces and applications to parametric PDEs

Authors

Abstract

We investigate non-adaptive methods of deep ReLU neural network approximation in Bochner spaces L2(U∞, X, μ) of functions on U∞ taking values in a separable Hilbert space X, where U∞ is either R∞ equipped with the standard Gaussian probability measure, or [−1,1]∞ equipped with the Jacobi probability measure. The functions to be approximated are assumed to satisfy a certain weighted ℓ2-summability of the coefficients of their generalized polynomial chaos expansion with respect to the measure μ. We prove convergence rates for this approximation in terms of the size of the approximating networks. These results are then applied to the approximation of solutions of parametric elliptic PDEs with random inputs, for both the lognormal and the affine cases.
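A minimal one-dimensional numpy sketch (not from the paper) of the mechanism the abstract leans on: for a lognormal-type function f(y) = exp(a·y) of a single standard Gaussian variable, the normalized Hermite (Wiener) chaos coefficients are c_k = exp(a²/2)·a^k/√(k!), and their rapid decay is exactly the kind of weighted ℓ2-summability that drives truncation rates, and hence network-size rates. The constant a and the truncation orders are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.7                                    # lognormal-type target f(y) = exp(a*y), y ~ N(0, 1)
y = rng.standard_normal(200_000)
f = np.exp(a * y)

K = 12                                     # maximal Hermite (chaos) order kept
fact = np.cumprod(np.concatenate(([1.0], np.arange(1.0, K + 1))))  # k! for k = 0..K

# Probabilists' Hermite polynomials via He_{k+1} = y*He_k - k*He_{k-1},
# normalized so that he_k = He_k / sqrt(k!) are orthonormal under N(0, 1).
He = np.empty((K + 1, y.size))
He[0] = 1.0
He[1] = y
for k in range(1, K):
    He[k + 1] = y * He[k] - k * He[k - 1]
he = He / np.sqrt(fact)[:, None]

# Exact chaos coefficients of exp(a*y): c_k = exp(a^2/2) * a^k / sqrt(k!).
c = np.exp(a**2 / 2) * a ** np.arange(K + 1) / np.sqrt(fact)

for m in (2, 4, 8, 12):
    approx = c[: m + 1] @ he[: m + 1]          # truncated chaos expansion of order m
    err = np.sqrt(np.mean((f - approx) ** 2))  # Monte Carlo estimate of the L2(mu) error
    print(f"order {m:2d}:  L2 error ~ {err:.2e},  coefficient tail ~ {np.sum(c[m+1:]**2):.2e}")
```

By orthonormality of the he_k, the squared L2(μ) truncation error equals the coefficient tail sum, which the printout confirms up to Monte Carlo noise.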

Similar articles

Approximation of high-dimensional parametric PDEs

Parametrized families of PDEs arise in various contexts such as inverse problems, control and optimization, risk assessment, and uncertainty quantification. In most of these applications, the number of parameters is large or perhaps even infinite. Thus, the development of numerical methods for these parametric problems is faced with the possible curse of dimensionality. This article is directed...

Universal Function Approximation by Deep Neural Nets with Bounded Width and ReLU Activations

This article concerns the expressive power of depth in neural nets with ReLU activations and bounded width. We are particularly interested in the following questions: what is the minimal width w_min(d) so that ReLU nets of width w_min(d) (and arbitrary depth) can approximate any continuous function on the unit cube [0, 1]^d arbitrarily well? For ReLU nets near this minimal width, what can one say ...
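As a complement to the bounded-width question above, here is a short numpy sketch (not from the article) of the classic shallow-network fact underlying such approximation results: any piecewise-linear interpolant on [0, 1] is realized exactly by a single hidden ReLU layer, so the sup-norm error is governed entirely by the interpolation grid. The target sin(2πx) and the unit counts are illustrative choices.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def interpolating_relu_net(knots, values):
    """One hidden ReLU layer realizing the piecewise-linear interpolant
    of (knots, values) exactly on [knots[0], knots[-1]]."""
    slopes = np.diff(values) / np.diff(knots)
    coeffs = np.diff(slopes, prepend=0.0)          # slope change contributed at each knot
    def net(x):
        return values[0] + relu(x[:, None] - knots[:-1]) @ coeffs
    return net

f = lambda x: np.sin(2 * np.pi * x)                # any continuous target on [0, 1]
x = np.linspace(0.0, 1.0, 10_000)
for n in (4, 16, 64, 256):
    knots = np.linspace(0.0, 1.0, n + 1)
    net = interpolating_relu_net(knots, f(knots))
    print(f"{n:3d} hidden units:  sup error ~ {np.abs(net(x) - f(x)).max():.2e}")
```

The question the article studies is the complementary one: how much of this width can be traded for depth without losing universality.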

Deep Neural Network Approximation using Tensor Sketching

Deep neural networks are powerful learning models that achieve state-of-the-art performance on many computer vision, speech, and language processing tasks. In this paper, we study a fundamental question that arises when designing deep network architectures: given a target network architecture, can we design a “smaller” network architecture that “approximates” the operation of the target network?...
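The paper's tool is tensor sketching; as a hedged stand-in, the numpy sketch below poses the same question in its simplest form: replacing one dense layer by two thin ones via a truncated SVD, trading parameters against a controlled output error. The dimensions and the synthetic near-low-rank weight matrix are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
d_out, d_in, r = 256, 512, 20

# Synthetic "target" layer whose weight matrix is close to rank r,
# as trained layers often are empirically.
W = rng.standard_normal((d_out, r)) @ rng.standard_normal((r, d_in)) / np.sqrt(r)
W += 0.01 * rng.standard_normal((d_out, d_in))

# Compress: replace the d_out x d_in layer by two thin layers A @ B (rank-r SVD).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A, B = U[:, :r] * s[:r], Vt[:r]

x = rng.standard_normal(d_in)
rel_err = np.linalg.norm(A @ (B @ x) - W @ x) / np.linalg.norm(W @ x)
print(f"relative output error {rel_err:.1e}; "
      f"params {d_out * d_in} -> {r * (d_out + d_in)}")
```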

Taming the ReLU with Parallel Dither in a Deep Neural Network

Rectified Linear Units (ReLU) seem to have displaced traditional ‘smooth’ nonlinearities as the activation function du jour in many – but not all – deep neural network (DNN) applications. However, nobody seems to know why. In this article, we argue that ReLUs are useful because they are ideal demodulators – this helps them perform fast abstract learning. However, this fast learning comes at the expense...
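One way to read “parallel dither” (an illustrative numpy sketch, not necessarily the authors' exact scheme): pass independently dithered copies of the signal through the ReLU and average, which converges to a Gaussian-smoothed ReLU with the closed form E[ReLU(x + σZ)] = x·Φ(x/σ) + σ·φ(x/σ) for Z ~ N(0, 1).

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(2)
relu = lambda z: np.maximum(z, 0.0)

# Parallel dither: K copies of the signal, each with independent additive
# noise injected before the ReLU, averaged afterwards.
K, sigma = 512, 0.5
x = np.linspace(-3.0, 3.0, 601)
dithered = relu(x[None, :] + sigma * rng.standard_normal((K, x.size))).mean(axis=0)

# The average converges to the Gaussian-smoothed ReLU:
# E[relu(x + sigma*Z)] = x*Phi(x/sigma) + sigma*phi(x/sigma).
Phi = lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0)))
phi = lambda t: exp(-t * t / 2.0) / sqrt(2.0 * pi)
smooth = np.array([xi * Phi(xi / sigma) + sigma * phi(xi / sigma) for xi in x])

print(f"max deviation from the smoothed ReLU: {np.abs(dithered - smooth).max():.3f}")
```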

Optimal approximation of piecewise smooth functions using deep ReLU neural networks

We study the necessary and sufficient complexity of ReLU neural networks—in terms of depth and number of weights—which is required for approximating classifier functions in the L2 sense. As a model class, we consider the set E^β(R^d) of possibly discontinuous piecewise C^β functions f : [−1/2, 1/2]^d → R, where the different “smooth regions” of f are separated by C^β hypersurfaces. For given dimension d ≥ ...
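A tiny numpy sketch (not from the article) of the mechanism behind such rates: two ReLU units form a ramp of width ε that approximates the discontinuous classifier 1{x > 0} on [−1/2, 1/2] with L2 error √(ε/3), so accuracy at a jump is bought with weights of size 1/ε. The grid resolution and the choices of ε are illustrative.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

# Two ReLU units realize a ramp of width eps approximating the
# discontinuous classifier 1{x > 0} on [-1/2, 1/2].
def ramp(x, eps):
    return relu(x / eps) - relu(x / eps - 1.0)

x = np.linspace(-0.5, 0.5, 400_001)
target = (x > 0).astype(float)
for eps in (1e-1, 1e-2, 1e-3):
    # The domain has length 1, so the L2 norm is the root mean square.
    err = np.sqrt(np.mean((ramp(x, eps) - target) ** 2))
    print(f"eps = {eps:.0e}:  L2 error ~ {err:.2e}  (theory sqrt(eps/3) = {np.sqrt(eps / 3):.2e})")
```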


Journal

Journal title: Journal of Complexity

Year: 2023

ISSN: 1090-2708, 0885-064X

DOI: https://doi.org/10.1016/j.jco.2023.101779